    Optimization in Virtual Machine Networking

    Network performance is a critical aspect of Virtual Machine systems, and its importance keeps growing across the world of computing. These systems are commonly employed in the IT departments of many organizations, since they help to build services that are highly reliable, available and secure, improve the efficiency of computing resource usage, and so on. In this thesis we analyze the state of the art of virtual machine networking, evaluating the advantages and drawbacks of the existing solutions. We then propose a new approach, showing that with a small amount of code modifications we can bring a classic emulated network device (we take e1000 as a reference example) to performance similar to that of paravirtualized solutions. However, this is not enough to push performance to the limit (especially latency). Therefore, we put together the lessons learned and introduce a new minimal paravirtualized solution that can be implemented in about 2400 lines of code in total (driver part and board emulation part) and is intended to outperform the currently existing solutions.

    A Study of I/O Performance of Virtual Machines

    In this study, we investigate some counterintuitive but frequent performance issues that arise when doing high-speed networking (or I/O in general) with Virtual Machines (VMs). VMs use one or more single-producer/single-consumer systems to exchange I/O data (e.g., network packets) with their hypervisor. We show that when the producer and the consumer process packets at different rates, the high cost of synchronization (interrupts and ‘kicks’) may reduce the throughput of the system well below that of the slower of the two parties; moreover, accelerating the faster party may cause the throughput to decrease. Our work provides a model for the throughput, efficiency and latency of producer/consumer systems when notifications or sleeping are used as the synchronization mechanism; identifies the different operating regimes depending on the operating parameters; validates the accuracy of the model against a VirtIO-based prototype, taking into account most of the details of real-world deployments; and provides practical and robust strategies to maximize throughput and minimize energy consumption while keeping latency under control, without depending on precise timing measurements or unreasonable assumptions about the system’s behavior. The study is particularly relevant to Network Function Virtualization deployments, where high-rate producer/consumer systems in virtualized environments are core components.
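
    As a minimal sketch of the synchronization problem analyzed above (not the paper's VirtIO prototype), the C fragment below shows a single-producer/single-consumer ring in which the consumer advertises, through a need_kick flag, whether it must be woken up; the producer then sends the expensive notification only when that flag is set. All names (spsc_ring, ring_produce, ring_consume, need_kick) are illustrative assumptions; a real implementation would attach the kick to an eventfd write or a virtqueue interrupt.

    /* Illustrative SPSC ring with notification suppression ("kicks").
     * Hypothetical names; sketch only, assuming exactly one producer
     * thread and one consumer thread. */
    #include <stdatomic.h>
    #include <stddef.h>

    #define RING_SLOTS 256                /* must be a power of two */

    struct spsc_ring {
        void *slot[RING_SLOTS];
        _Atomic size_t head;              /* written by the producer */
        _Atomic size_t tail;              /* written by the consumer */
        _Atomic int    need_kick;         /* consumer requests a wakeup */
    };

    /* Enqueue one packet. Returns -1 if the ring is full, 1 if the caller
     * must notify (kick) the consumer, 0 if no notification is needed. */
    static int ring_produce(struct spsc_ring *r, void *pkt)
    {
        size_t h = atomic_load_explicit(&r->head, memory_order_relaxed);
        size_t t = atomic_load_explicit(&r->tail, memory_order_acquire);

        if (h - t == RING_SLOTS)
            return -1;                    /* full */

        r->slot[h & (RING_SLOTS - 1)] = pkt;
        atomic_store_explicit(&r->head, h + 1, memory_order_seq_cst);

        /* Kick only if the consumer declared it is (about to go) asleep. */
        return atomic_exchange(&r->need_kick, 0) ? 1 : 0;
    }

    /* Dequeue one packet, or return NULL after requesting a kick; in the
     * latter case the caller blocks (e.g. on an eventfd) and retries. */
    static void *ring_consume(struct spsc_ring *r)
    {
        size_t t = atomic_load_explicit(&r->tail, memory_order_relaxed);

        if (atomic_load_explicit(&r->head, memory_order_acquire) == t) {
            atomic_store(&r->need_kick, 1);
            /* Re-check to close the race with a packet queued meanwhile. */
            if (atomic_load(&r->head) == t)
                return NULL;              /* really empty: go to sleep */
            atomic_store(&r->need_kick, 0);
        }

        void *pkt = r->slot[t & (RING_SLOTS - 1)];
        atomic_store_explicit(&r->tail, t + 1, memory_order_release);
        return pkt;
    }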

    Flexible virtual machine networking using netmap passthrough

    The rising interest in Network Function Virtualization (NFV) requires Virtual Machines (VMs) to operate with diversified networking workloads, from traditional bulk TCP transfers to novel ones featuring extremely high packet rates. In response, researchers have explored and proposed new solutions for high performance VM networking, including optimizations to virtual network adapters (such as VirtIO) to support high speed bulk traffic, and alternative frameworks for userspace networking and physical or virtual passthrough. To date, we are still missing a comprehensive solution that supports such extreme workloads across multiple operating systems and hypervisors, while at the same time addressing other requirements such as ease of configuration, operating system independence, scalability and isolation. In this paper we present ptnet, an approach to network I/O virtualization that provides high performance for both traditional TCP/IP and high packet rate applications. ptnet leverages the features of the netmap framework (including virtualization and passthrough support), and defines a simple yet performant network device model that can be easily supported in different operating systems and hypervisors. We prove the effectiveness of our approach by comparing ptnet's performance with one of the state-of-the-art I/O virtualization solutions, namely VirtIO on Linux and QEMU/KVM. ptnet is available under a BSD license as part of the netmap distribution on GitHub.
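
    ptnet itself is a device model plus guest drivers, so there is no short self-contained listing for it; as background, the sketch below shows a minimal userspace receive loop written against the classic netmap helper API (nm_open/nm_nextpkt) that ships with the netmap distribution. The interface name "netmap:eth0" is a placeholder, and error handling is kept to a minimum.

    /* Minimal netmap receive loop (not ptnet itself): it only shows the
     * kind of userspace application that ptnet/netmap passthrough is
     * designed to accelerate inside a VM. */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
        if (d == NULL) {
            perror("nm_open");
            return 1;
        }

        struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };

        for (;;) {
            poll(&pfd, 1, -1);                /* wait for packets */

            struct nm_pkthdr h;
            const unsigned char *buf;
            while ((buf = nm_nextpkt(d, &h)) != NULL) {
                /* h.len bytes of the frame are available at buf. */
                printf("received %u bytes\n", (unsigned)h.len);
            }
        }

        nm_close(d);                          /* not reached in this sketch */
        return 0;
    }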

    Very high speed link emulation with TLEM

    In this work we discuss the limitations of link emulators based on conventional network stacks, and present our alternative architecture, called TLEM, which is designed to handle current high speed links and to remain open to future speed improvements. TLEM is structured as a pipeline of stages, implemented as separate threads with limited interactions with each other, so that high performance can be achieved. Our emulator can handle bidirectional traffic at speeds of over 18 Mpps (64-byte packets) and 40 Gbit/s (1500-byte packets) per direction, even with large emulation delays. Even higher performance can be achieved with shorter delays, as the workload fits better into the L3 cache of the system. TLEM is distributed as BSD-licensed open source as part of the netmap distribution, and runs on any system that supports netmap (this includes FreeBSD, Linux and now even Windows).
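
    As a rough illustration only (this is not TLEM's code, which splits the work across per-stage threads connected by lock-free queues), the self-contained sketch below shows the timestamp-and-hold logic at the core of link-delay emulation: each arriving frame is stamped with a release time and forwarded only once that time has passed. All names and the 20 ms delay are assumptions made for the example.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define PIPE_SLOTS    1024            /* frames that can be "in flight" */
    #define LINK_DELAY_NS 20000000ull     /* 20 ms one-way emulated delay */

    struct pending {
        uint64_t due_ns;                  /* arrival time + emulated delay */
        unsigned len;
        unsigned char data[1518];
    };

    static struct pending pipe_buf[PIPE_SLOTS];
    static unsigned pipe_head, pipe_tail; /* head: next insert, tail: next due */

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* Called when a frame arrives from the input port. */
    static int link_enqueue(const unsigned char *frame, unsigned len)
    {
        if (len > sizeof(pipe_buf[0].data) ||
            (pipe_head + 1) % PIPE_SLOTS == pipe_tail)
            return -1;                    /* too big or emulated pipe full */
        struct pending *p = &pipe_buf[pipe_head];
        p->due_ns = now_ns() + LINK_DELAY_NS;
        p->len = len;
        memcpy(p->data, frame, len);
        pipe_head = (pipe_head + 1) % PIPE_SLOTS;
        return 0;
    }

    /* Called in the main loop: forward every frame whose time has come.
     * Frames are stored in arrival order, so the head of the FIFO is
     * always the first one to become due. */
    static void link_drain(void)
    {
        while (pipe_tail != pipe_head && pipe_buf[pipe_tail].due_ns <= now_ns()) {
            struct pending *p = &pipe_buf[pipe_tail];
            printf("forwarding %u bytes\n", p->len);  /* stand-in for TX */
            pipe_tail = (pipe_tail + 1) % PIPE_SLOTS;
        }
    }

    int main(void)
    {
        unsigned char frame[64] = {0};
        link_enqueue(frame, sizeof(frame));
        while (pipe_tail != pipe_head)
            link_drain();                 /* busy-poll until the frame is due */
        return 0;
    }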

    Reducing the complexity of virtual machine networking

    Virtualization is an enabling technology that improves scalability, reliability, and flexibility. Virtualized networking is typically tackled by emulating or paravirtualizing network interface cards. This approach, however, leads to implementation and management complexity and has to conform to some limitations imposed by the Ethernet standard. RINA turns the current approach to virtualized networking on its head: instead of emulating networks to perform inter-process communication on a single processing system, it sees networking as an extension of local inter-process communication. In this article, we show how RINA can leverage a paravirtualization approach to achieve a more manageable solution for virtualized networking. We also present experimental results obtained with IRATI, the reference open source implementation of RINA, which show the potential performance that can be achieved by deploying our solution.

    ARCFIRE: experimentation with the Recursive InterNetwork Architecture

    European funded research into the Recursive InterNetwork Architecture (RINA) started with IRATI, which developed an initial prototype implementation for OS/Linux. IRATI was quickly succeeded by the PRISTINE project, which developed different policies, each tailored to specific use cases. Both projects were development-driven, and most experimentation was limited to unit testing and smaller-scale integration testing. In order to assess the viability of RINA as an alternative to current network technologies, larger-scale experimental deployments are needed. The opportunity arose for a project that shifted the focus from development towards experimentation, leveraging Europe's investment in Future Internet Research and Experimentation (FIRE+) infrastructures. The ARCFIRE project took this next step, developing a user-friendly framework for automating RINA experiments. This paper reports and discusses the implications of the experimental results achieved by the ARCFIRE project, using open source RINA implementations deployed on FIRE+ testbeds. The experiments analyze the properties of RINA relevant to fast network recovery, network renumbering, Quality of Service, distributed mobility management, and network management. The results highlight RINA properties that can greatly simplify the deployment and management of real-world networks; hence, the next steps should focus on addressing very specific use cases with complete RINA-based networking solutions that can be transferred to the market.

    Mutual Guarantee Institutions (MGIs) and small business credit during the crisis

    The recent economic and financial crisis has drawn attention to how mutual guarantee institutions (MGIs) help small and medium enterprises access bank financing. The aim of this paper is twofold. First, we describe the structural features of the Italian market for mutual guarantees and its significance for small business credit. To this end, we use extensive databases (the Central Credit Register and the Central Balance Sheet Register) as well as specific surveys, which allow us to fill information gaps about this industry and to quantify regional diversity. Second, we investigate whether MGIs' support for small firms continued to be effective in 2008-09, when credit constraints on Italian firms peaked. We find that MGIs played a role in avoiding a break in credit flows to affiliated firms, which also benefited from a lower cost of credit. However, this came at the cost of a deterioration in credit quality, which was more intense for customers with guarantees from MGIs.
    Keywords: microfinance, peer monitoring, small business finance

    Impact of COVID-19 on cardiovascular testing in the United States versus the rest of the world

    Objectives: This study sought to quantify and compare the decline in volumes of cardiovascular procedures between U.S. and non-U.S. institutions during the early phase of the coronavirus disease-2019 (COVID-19) pandemic. Background: The COVID-19 pandemic has disrupted the care of many non-COVID-19 illnesses. Reductions in diagnostic cardiovascular testing around the world have led to concerns over the implications of reduced testing for cardiovascular disease (CVD) morbidity and mortality. Methods: Data were submitted to the INCAPS-COVID (International Atomic Energy Agency Non-Invasive Cardiology Protocols Study of COVID-19), a multinational registry comprising 909 institutions in 108 countries (including 155 facilities in 40 U.S. states), assessing the impact of the COVID-19 pandemic on volumes of diagnostic cardiovascular procedures. Data were obtained for April 2020 and compared with volumes of baseline procedures from March 2019. We compared laboratory characteristics, practices, and procedure volumes between U.S. and non-U.S. facilities and between U.S. geographic regions, and identified factors associated with volume reduction in the United States. Results: Reductions in the volumes of procedures in the United States were similar to those in non-U.S. facilities (68% vs. 63%, respectively; p = 0.237), although U.S. facilities reported greater reductions in invasive coronary angiography (69% vs. 53%, respectively; p < 0.001). Significantly more U.S. facilities than non-U.S. facilities reported increased use of telehealth and patient screening measures, such as temperature checks, symptom screenings, and COVID-19 testing. Reductions in volumes of procedures differed between U.S. regions, with larger declines observed in the Northeast (76%) and Midwest (74%) than in the South (62%) and West (44%). Prevalence of COVID-19, staff redeployments, outpatient centers, and urban centers were associated with greater reductions in volume in U.S. facilities in a multivariable analysis. Conclusions: We observed marked reductions in U.S. cardiovascular testing in the early phase of the pandemic and significant variability between U.S. regions. The association between reductions in volumes and COVID-19 prevalence in the United States highlights the need for proactive efforts to maintain access to cardiovascular testing in areas most affected by outbreaks of COVID-19 infection.

    Enhanced network processing in the Cloud Computing era

    Cloud Computing has radically changed the way we look at computing hardware resources, with server machines hosting tens to thousands of possibly unrelated applications belonging to different customers. Virtualization technologies provide the isolation guarantees that are necessary for untrusted applications to coexist within the same physical machine. Although hardware support for virtualization is essential to achieve acceptable performance, the success of Cloud Computing is still largely attributable to a wide range of software products, frameworks and libraries that complete and enhance the virtualization capabilities directly provided by the hardware. In particular, Cloud hosts often need to process in software huge amounts of network traffic on behalf of Virtual Machines and application containers, typically attached to a common virtual switch. Even with the many CPUs available on modern machines (100 and beyond), software packet processing at very high speeds (i.e., 1-100 Mpps) may be challenging because of the communication overhead incurred by the processing pipeline when using basic mechanisms such as inter-process notifications, sleeping, or lock-free queues. The current literature lacks models to analyze these low-level mechanisms in depth and to identify suitable guidelines for the design of high-speed processing systems. This thesis presents a thorough discussion of the impact that queues and synchronization mechanisms have on the performance of I/O processing pipelines, for all the possible operating regimes, with a particular focus on networking and virtualization. Models for the throughput, latency and energy efficiency of producer-consumer systems are introduced and validated experimentally to verify that they adequately represent real systems. Several fast Single Producer Single Consumer queues are discussed and characterized in terms of their interactions with the cache coherence system, two of them being original contributions of this thesis. As an application of how these basic mechanisms can be used in practice, a novel high-performance packet scheduling architecture, suitable for Data Centers, is presented and experimentally validated, showing better throughput, latency and isolation than current solutions. Overall, the analysis presented in this thesis provides suggestions and guidelines for designing efficient software datapath components for virtual network switches, hypervisor backends for device emulation, and other I/O processing components relevant to Cloud Computing environments.
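
    To make the cache coherence discussion concrete, the fragment below sketches a baseline Lamport-style single-producer/single-consumer queue with the producer and consumer indices padded onto separate cache lines to avoid false sharing. It is only an illustration of the kind of structure characterized above, not one of the thesis's own queue designs, and all names (spscq, spscq_push, spscq_pop) are assumptions made for the example.

    #include <stdatomic.h>
    #include <stddef.h>

    #define CACHE_LINE 64
    #define QSLOTS 1024                       /* power of two */

    struct spscq {
        /* Indices on separate cache lines: the producer and the consumer
         * never write to the same line, avoiding false sharing. */
        _Alignas(CACHE_LINE) _Atomic size_t prod;   /* producer-only writes */
        _Alignas(CACHE_LINE) _Atomic size_t cons;   /* consumer-only writes */
        _Alignas(CACHE_LINE) void *slot[QSLOTS];
    };

    /* Producer side: returns -1 when the queue is full. */
    static int spscq_push(struct spscq *q, void *item)
    {
        size_t p = atomic_load_explicit(&q->prod, memory_order_relaxed);
        size_t c = atomic_load_explicit(&q->cons, memory_order_acquire);

        if (p - c == QSLOTS)
            return -1;                        /* full */
        q->slot[p & (QSLOTS - 1)] = item;
        atomic_store_explicit(&q->prod, p + 1, memory_order_release);
        return 0;
    }

    /* Consumer side: returns NULL when the queue is empty. */
    static void *spscq_pop(struct spscq *q)
    {
        size_t c = atomic_load_explicit(&q->cons, memory_order_relaxed);
        size_t p = atomic_load_explicit(&q->prod, memory_order_acquire);

        if (p == c)
            return NULL;                      /* empty */
        void *item = q->slot[c & (QSLOTS - 1)];
        atomic_store_explicit(&q->cons, c + 1, memory_order_release);
        return item;
    }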